Current Issue: October-December | Volume: 2025 | Issue Number: 4 | Articles: 5
Autonomous driving technology is a current research hotspot in the fields of artificial intelligence and computer vision. Its core relies on environmental information obtained from sensors such as cameras and radars. Image processing plays a crucial role in autonomous driving, covering tasks such as lane detection, obstacle recognition, and environmental perception. With the rapid development of autonomous driving technology, the demands placed on image processing systems have increased significantly, especially in terms of real-time performance, accuracy, and multifunctionality. Existing image processing tools are mostly single-function, making it difficult to meet the complex and varied demands of autonomous driving scenarios. Developing a system that integrates multiple image processing functions can therefore effectively enhance the environmental perception capabilities of autonomous driving systems and provide reliable data support for subsequent path planning and decision-making. This study developed a multifunctional image processing system based on the C language, focusing on the system's architecture, module division, and algorithm implementation. Experimental results show that the system effectively improves the environmental perception capabilities of autonomous driving systems and performs well in terms of processing efficiency and user satisfaction.
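The abstract above describes a modular, multifunctional design but gives no implementation detail. The following is a minimal sketch of the modular-pipeline idea, written in Python with OpenCV for brevity (the paper's system is implemented in C); all module and function names here are hypothetical illustrations, not the paper's code.

```python
# Minimal sketch of a modular image-processing pipeline (hypothetical names).
# The paper's system is written in C; Python/OpenCV is used here only to
# illustrate composing independent processing modules for perception tasks.
import cv2
import numpy as np

def to_grayscale(frame: np.ndarray) -> np.ndarray:
    """Pre-processing module: convert a BGR camera frame to grayscale."""
    return cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

def detect_edges(gray: np.ndarray) -> np.ndarray:
    """Edge-detection module, a common first step for lane detection."""
    return cv2.Canny(gray, 100, 200)

def lane_region_mask(edges: np.ndarray) -> np.ndarray:
    """Keep only the lower half of the frame, where lane markings usually appear."""
    mask = np.zeros_like(edges)
    mask[edges.shape[0] // 2:, :] = 255
    return cv2.bitwise_and(edges, mask)

# A simple registry of modules lets the system dispatch functions on demand.
PIPELINE = [to_grayscale, detect_edges, lane_region_mask]

def run_pipeline(frame: np.ndarray) -> np.ndarray:
    out = frame
    for stage in PIPELINE:
        out = stage(out)
    return out

if __name__ == "__main__":
    # A dummy frame with a painted line stands in for a real camera image.
    dummy = np.zeros((480, 640, 3), dtype=np.uint8)
    cv2.line(dummy, (100, 470), (320, 240), (255, 255, 255), 5)
    result = run_pipeline(dummy)
    print("output shape:", result.shape,
          "nonzero edge pixels:", int(np.count_nonzero(result)))
```

In a production system each module would run as a separate stage with its own timing budget; the registry pattern is just one way to make the pipeline extensible.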
Predicting the motion of handwritten digits in video sequences is challenging due to complex spatiotemporal dependencies, variable writing styles, and the need to preserve fine-grained visual details—all of which are essential for real-time handwriting recognition and digital learning applications. In this context, our study aims to develop a robust predictive framework that can accurately forecast digit trajectories while preserving structural integrity. To address these challenges, we propose a novel video prediction architecture integrating ConvCARU with a modified DCGAN to effectively separate the background from the foreground. This ensures enhanced extraction and preservation of spatial and temporal features through convolution-based gating and adaptive fusion mechanisms. Based on extensive experiments conducted on the MNIST dataset, which comprises 70,000 images, our approach achieves an SSIM of 0.901 and a PSNR of 29.31 dB. This reflects a statistically significant improvement in PSNR of +0.20 dB (p < 0.05) compared to current state-of-the-art models, demonstrating superior capability in maintaining consistent structural fidelity in predicted video frames. Furthermore, our framework offers better computational efficiency, with lower memory consumption than most other approaches, underscoring its practicality for deployment in real-time, resource-constrained applications. These promising results validate the effectiveness of our integrated ConvCARU–DCGAN approach in capturing fine-grained spatiotemporal dependencies, positioning it as a compelling solution for enhancing video-based handwriting recognition and sequence forecasting, and paving the way for its adoption in diverse applications requiring high-resolution, efficient motion prediction.
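The abstract reports frame quality in terms of SSIM and PSNR. As a point of reference, the sketch below shows how these two metrics are typically computed for a predicted frame against its ground truth using NumPy and scikit-image; it is not the authors' evaluation code, just the standard metric definitions.

```python
# Reference sketch: PSNR and SSIM between a predicted frame and ground truth,
# as commonly used for video-prediction evaluation (not the authors' code).
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(pred: np.ndarray, target: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR = 10 * log10(MAX^2 / MSE), in decibels."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-in frames: a random "ground truth" and a slightly perturbed "prediction".
    target = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
    noise = rng.integers(-5, 6, size=(64, 64))
    pred = np.clip(target.astype(np.int16) + noise, 0, 255).astype(np.uint8)
    print(f"PSNR: {psnr(pred, target):.2f} dB")
    print(f"SSIM: {ssim(pred, target, data_range=255):.3f}")
```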
Image dehazing is a critical task in image restoration, aiming to recover clear images from hazy scenes. This process is vital for applications including machine recognition, security monitoring, and aerial photography. Current dehazing algorithms often struggle with multi-scale feature extraction, detail preservation, effective haze removal, and color fidelity. To address these limitations, this paper introduces a novel Parallel Image-Dehazing Network (PID-Net). PID-Net combines a Convolutional Neural Network (CNN) for precise local feature extraction with a Vision Transformer (ViT) that captures global contextual information, overcoming the shortcomings of methods relying solely on either local or global features. A multi-scale CNN branch extracts diverse local details through varying receptive fields, enhancing the restoration of fine textures and details. To optimize the ViT component, a lightweight attention mechanism with CNN compensation is integrated, maintaining performance while minimizing the parameter count. Furthermore, a Redundant Feature Filtering Module is incorporated to filter out noise and haze-related artifacts, promoting the learning of subtle details. Extensive experiments on public datasets demonstrate PID-Net's significant superiority over state-of-the-art dehazing algorithms in both quantitative metrics and visual quality.
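To make the parallel local/global design concrete, the toy PyTorch sketch below pairs a multi-scale CNN branch with a lightweight single-head attention branch over a downsampled feature map and fuses the two. It is an illustrative stand-in for the general idea, not the published PID-Net architecture; all class names and hyperparameters are assumptions.

```python
# Toy sketch of a parallel local/global dehazing model: a multi-scale CNN
# branch for local detail plus a lightweight attention branch for global
# context, fused into an output image. Not the published PID-Net.
import torch
import torch.nn as nn

class MultiScaleCNNBranch(nn.Module):
    """Local branch: convolutions with different kernel sizes (receptive fields)."""
    def __init__(self, ch: int = 32):
        super().__init__()
        self.c3 = nn.Conv2d(3, ch, kernel_size=3, padding=1)
        self.c5 = nn.Conv2d(3, ch, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(2 * ch, ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([torch.relu(self.c3(x)),
                                    torch.relu(self.c5(x))], dim=1))

class GlobalAttentionBranch(nn.Module):
    """Global branch: single-head self-attention over a downsampled feature map."""
    def __init__(self, ch: int = 32, down: int = 8):
        super().__init__()
        self.embed = nn.Conv2d(3, ch, kernel_size=down, stride=down)  # patchify
        self.attn = nn.MultiheadAttention(embed_dim=ch, num_heads=1, batch_first=True)
        self.up = nn.Upsample(scale_factor=down, mode="bilinear", align_corners=False)

    def forward(self, x):
        f = self.embed(x)                      # (B, C, H/d, W/d)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, HW, C)
        out, _ = self.attn(tokens, tokens, tokens)
        return self.up(out.transpose(1, 2).reshape(b, c, h, w))

class ToyParallelDehazer(nn.Module):
    def __init__(self, ch: int = 32):
        super().__init__()
        self.local = MultiScaleCNNBranch(ch)
        self.glob = GlobalAttentionBranch(ch)
        self.head = nn.Conv2d(2 * ch, 3, kernel_size=3, padding=1)

    def forward(self, x):
        return self.head(torch.cat([self.local(x), self.glob(x)], dim=1))

if __name__ == "__main__":
    hazy = torch.rand(1, 3, 64, 64)             # dummy hazy image
    print(ToyParallelDehazer()(hazy).shape)     # torch.Size([1, 3, 64, 64])
```

The design choice worth noting is that the attention branch operates on a heavily downsampled grid, which is one common way to keep transformer-style global context affordable alongside a full-resolution CNN branch.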
Motor control boards exhibit various defects, such as inconsistent color differences, incorrectly positioned plug-in components, and solder short circuits. These defects directly affect the performance and stability of the motor control board and thereby have a negative impact on product quality, so studying defect detection technology for motor control boards is an important means of improving their quality control. Firstly, digital image processing methods for the motor control board were studied, and noise suppression methods affecting image feature extraction were analyzed. Secondly, a model for defect feature extraction and color difference recognition of the tested motor control board was established, and products were judged qualified or defective based on feature thresholds. Thirdly, the search algorithm for defective images was optimized. Finally, comparative experiments were conducted on typical motor control boards, and the results demonstrate that the image-processing-based defect detection model established in this paper achieves an accuracy of over 99%. It is suitable for timely processing of images from large quantities of motor control boards on the production line and achieves efficient defect detection. The method can be used not only for online detection of motor control board defects but also as a solution for integrated circuit board defect processing in industry.
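The sketch below illustrates the general shape of a feature-threshold decision such as the one described above: denoise, extract a simple feature (here, mean color difference against a golden reference board), and compare it to a tolerance. The threshold value, feature choice, and function names are illustrative assumptions, not the paper's calibrated model.

```python
# Sketch of threshold-based pass/fail inspection from a simple image feature
# (mean color difference against a golden reference). Values are illustrative.
import numpy as np
import cv2

COLOR_DIFF_THRESHOLD = 5.0  # hypothetical tolerance on mean absolute difference

def denoise(img: np.ndarray) -> np.ndarray:
    """Noise suppression before feature extraction (median filtering as one option)."""
    return cv2.medianBlur(img, 3)

def mean_color_difference(test: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute per-pixel difference between the board under test and a golden sample."""
    return float(np.mean(np.abs(test.astype(np.float32) - reference.astype(np.float32))))

def inspect_board(test: np.ndarray, reference: np.ndarray) -> str:
    diff = mean_color_difference(denoise(test), denoise(reference))
    return "qualified" if diff <= COLOR_DIFF_THRESHOLD else "defective"

if __name__ == "__main__":
    golden = np.full((120, 200, 3), 90, dtype=np.uint8)  # stand-in golden board image
    bad = golden.copy()
    bad[40:80, 60:120] = 160                              # simulated color-difference defect
    print(inspect_board(golden.copy(), golden))           # qualified
    print(inspect_board(bad, golden))                     # defective
```

In practice the feature set would also cover component position and solder joints, with per-feature thresholds tuned on labeled boards.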
Image processing and computer vision are rapidly advancing technological fields that have been widely applied in medical imaging, autonomous driving, intelligent manufacturing, and other industries. With the rise of deep learning, traditional image processing methods have gradually been replaced by deep learning algorithms, which achieve remarkable results in tasks such as object detection, image classification, and segmentation. This paper explores the core algorithms and applications of image processing and computer vision, reviewing classical techniques and algorithms while analyzing advanced methods based on deep learning. By discussing applications across various fields, the paper not only demonstrates the current state of the technology but also highlights its challenges and directions for development. Finally, it forecasts future research trends in image processing and computer vision, particularly potential developments under the influence of artificial intelligence and big data.